 

THE PREDICTIVE MACHINE

How the Brain Models the World —

and What That Means for How We Learn, Engineer, and Grow

 

Joseph P. McFadden, Sr.

The Holistic Analyst

McFaddenCAE.com

2026


 


A Question That Sounds Simple

 

When you look at a coffee cup on a table across the room, what are you actually doing?

The obvious answer is: you are seeing it. Light enters the eye, is converted by the retina into electrical signals that travel to the visual cortex, and you perceive the cup. Simple, passive, like a camera taking a picture.

But that is not what is happening. Not even close.

Your brain has already generated a prediction of what is on that table — based on everything it knows about tables, about the room you are in, about coffee cups and the contexts in which they appear — before the signal triggered by that light has even finished traveling from your retina to your visual cortex. What you call seeing is not the arrival of information. It is the confirmation or correction of a prediction that was already in place.

You are not observing the world. You are modeling it. Constantly. Automatically. At every moment of your waking life.

This idea sits at the heart of one of the most important frameworks in contemporary neuroscience, known variously as predictive coding, predictive processing, and active inference. The scientist most closely associated with its mathematical formalization is Karl Friston at University College London, whose free energy principle has become one of the most cited frameworks in all of neuroscience and cognitive science. But the intuition behind it is older — something many of us have sensed without having the language for it.

The brain is not a passive recording device. It is an active prediction machine. Understanding what that means — at the level of mechanisms and implications — changes how you think about perception, learning, expertise, error, and growth. It changes how you think about engineering. And it changes how you think about the person you are becoming through the work you do.

 

The Scale of the Problem

 

The brain receives approximately eleven million bits of information per second through its sensory organs. The eyes alone contribute around ten million of those bits. Of those eleven million bits per second, the brain consciously processes approximately forty to fifty. The ratio of information available to information consciously processed is roughly two hundred thousand to one.
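
Taking those round figures at face value, the arithmetic behind the ratio is simply:

$$ \frac{11{,}000{,}000 \ \text{bits/s available}}{50 \ \text{bits/s consciously processed}} \;=\; 220{,}000 $$

Using the lower figure of forty bits per second instead gives roughly 275,000 to one; either way, "two hundred thousand to one" is the right order of magnitude.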

The question, then, is not how the brain processes information. The question is: how does it decide which fifty bits out of eleven million are worth bringing to consciousness?

The answer, according to the predictive processing framework, is this: the brain brings to consciousness the information that surprises it. The information that deviates from its predictions. The information that carries an error signal — a mismatch between what was expected and what arrived. Everything else — the vast majority of the incoming stream — is simply confirmation of what the brain already predicted. Confirmed predictions do not need to reach consciousness. They are handled automatically, efficiently, without demanding conscious attention.

Why you do not notice the weight of your clothes — until now:

Your brain predicted that weight. The sensory signals confirming the prediction were suppressed before they reached consciousness. The moment a reason to attend to that signal is introduced, the prediction changes, and the signal comes through. Consciousness is not a recording of the world. It is a register of surprises.

 

The Architecture of Prediction

 

In the predictive coding framework, the brain is understood as a hierarchical system of prediction and error. At every level of the hierarchy — from the most basic sensory processing up through the highest levels of abstract thought — the same fundamental process is occurring.

Higher levels of the hierarchy send predictions downward: this is what I expect to receive. Lower levels compare the incoming sensory signal to the prediction and send back only the discrepancy — the prediction error. The part of the signal that matched the prediction is cancelled out. It does not need to travel up the hierarchy because it carries no new information. Only the surprise propagates.

The brain is, at its computational core, a difference engine. It is constantly calculating the gap between what it expects and what it receives — and using that gap, that error signal, to update its predictions for next time.

Learning is the reduction of prediction error. Every experience that surprises you is an opportunity for the brain to update its internal models.

This is what learning is at the neural level. Every surprise that is understood — followed by a revised prediction that better fits the world — represents genuine growth in the accuracy of the brain's internal models. And every experience that goes entirely as predicted, while comfortable and efficient, teaches the brain relatively little that is new.
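
The loop itself is easy to sketch. The toy code below is purely illustrative: the input values, the starting prediction, and the learning rate are arbitrary assumptions, and nothing about it is meant as a model of a real neural circuit. It shows one level of the hierarchy holding a prediction, passing on only the error, and nudging the prediction toward what actually arrived.

```python
# Minimal sketch of a single prediction-error-update loop.
# All numbers are invented for illustration; this is not a neural model.

def predictive_loop(signals, learning_rate=0.3):
    prediction = 0.0                          # the current expectation
    for observed in signals:
        error = observed - prediction         # only the surprise is passed on
        prediction += learning_rate * error   # revise the expectation
        print(f"observed={observed:4.1f}  error={error:+5.2f}  "
              f"new prediction={prediction:5.2f}")

# A steady input is quickly explained away (the error decays toward zero);
# the jump from 1.0 to 3.0 produces a fresh burst of surprise.
predictive_loop([1.0, 1.0, 1.0, 1.0, 3.0, 3.0, 3.0])
```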

 

Karl Friston and the Free Energy Principle

 

Karl Friston's contribution to this framework is the free energy principle — a mathematical statement that unifies predictive coding, action, and learning under a single principle. The details are technically demanding, but the core intuition is both accessible and important.

Friston proposes that all self-organizing biological systems — including the brain, including entire organisms — act in ways that minimize what he calls free energy, which in this context is roughly equivalent to surprise. Systems that minimize surprise are systems that maintain themselves in the states they expect to be in. They resist disorder. They persist.
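
Stated in its textbook variational form (the standard formulation in the literature, not a quotation from Friston's papers), free energy F is an upper bound on surprise. With o the sensory data, s the hidden causes, p the organism's generative model, and q its current best guess about those causes:

$$
F \;=\; \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
\;=\; -\ln p(o) \;+\; D_{\mathrm{KL}}\big[\,q(s)\,\big\|\,p(s \mid o)\,\big]
\;\ge\; -\ln p(o).
$$

Because the divergence term can never be negative, F can never fall below the surprise term, the negative log probability of the sensory data. Driving free energy down therefore drives surprise down with it, which is why the two are treated as roughly interchangeable here.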

Two Ways to Minimize Surprise

There are two fundamental strategies available to any prediction-making system. The first is to update the internal model to better predict the world as it is — to learn, to build more accurate representations, to reduce the gap between prediction and reality by improving the prediction. The second is to act on the world to make the world conform to the predictions — to change reality to match the model, rather than changing the model to match reality.

Both strategies reduce the discrepancy between prediction and outcome. The brain uses both, constantly, often simultaneously. The dancer who has internalized the choreography acts to make their body conform to the predicted sequence. The scientist whose experiment produces unexpected results either revises the hypothesis or adjusts the experimental conditions. The engineer who finds a discrepancy between simulation and physical test either refines the model or modifies the design.

Action and learning are not separate things. They are two expressions of the same underlying drive — the drive to close the gap between the world as modeled and the world as experienced.
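
Both routes can be shown in a few lines. In the sketch below, with invented numbers, the gap between a predicted value and the state of the world is shrunk first by revising the prediction and then by acting on the world; both moves reduce the same error term.

```python
# Two ways to shrink the same prediction error (numbers invented for illustration).

prediction, world = 20.0, 35.0
print(f"initial error: {world - prediction:+.1f}")

# Route 1: perception / learning -- revise the model toward the world.
prediction += 0.5 * (world - prediction)
print(f"after updating the model:  {world - prediction:+.1f}")

# Route 2: action -- change the world toward the prediction,
# the way a thermostat drives a room toward its setpoint.
world -= 0.5 * (world - prediction)
print(f"after acting on the world: {world - prediction:+.1f}")
```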

 

Active Inference: The Brain as Agent

 

Friston calls this active inference. The brain is not a passive receiver that occasionally sends motor commands. It is an active agent, continuously engaged in a loop of prediction, sampling, error detection, and revision. It generates hypotheses about the world, tests them through perception and action, and updates accordingly.

Active inference is not something the brain does when it is solving a problem. It is what the brain is doing all the time. Breathing is active inference. Walking is active inference. Conversation is active inference. The brain is always inferring the hidden causes in the world behind its sensory inputs — and always acting to sample the inputs that will best test and refine its current best model.

We are all, always, doing science. We are all, always, testing hypotheses about the world and revising them in light of evidence.

The difference between informal, unconscious model-building and formal scientific inquiry is not a difference in kind. It is a difference in rigor, in explicitness, in the systematizing of a process the brain already knows how to do. What formal education does at its best is take that natural process and make it conscious — give it vocabulary, give it methods, give it tools for generating better predictions and more precise tests of those predictions.

 

The Bayesian Brain

 

Thomas Bayes was an eighteenth-century English minister and mathematician who developed a theorem about how to update beliefs in the light of new evidence. Bayes' theorem describes the optimal way to revise a prior belief given new data. The Bayesian brain hypothesis proposes that the brain operates as a Bayesian inference machine — maintaining probabilistic beliefs about the causes of its sensory inputs and updating those beliefs in a way that is mathematically consistent with Bayes' theorem.

What this means practically is that the brain does not treat all predictions equally. It weights its predictions by its confidence in them. A prediction based on extensive experience — a well-established prior — is treated as more reliable than a prediction based on sparse or ambiguous evidence. When new sensory information arrives, how much the brain updates its model depends on the relative reliability of the prior versus the new evidence.
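
For beliefs that can be treated as Gaussian, this confidence weighting has a simple closed form: each source is weighted by its precision, the reciprocal of its variance. The sketch below uses invented numbers and is only an illustration of that weighting, not a claim about how any particular belief is encoded.

```python
# Precision-weighted combination of a Gaussian prior and a Gaussian observation.
# Means and variances are arbitrary illustrative values.

def gaussian_update(prior_mean, prior_var, obs, obs_var):
    prior_precision = 1.0 / prior_var
    obs_precision = 1.0 / obs_var
    post_var = 1.0 / (prior_precision + obs_precision)
    post_mean = post_var * (prior_precision * prior_mean + obs_precision * obs)
    return post_mean, post_var

# A confident prior barely moves in response to a noisy observation...
print(gaussian_update(prior_mean=10.0, prior_var=0.1, obs=14.0, obs_var=4.0))

# ...while a vague prior is pulled almost all the way to a clean measurement.
print(gaussian_update(prior_mean=10.0, prior_var=4.0, obs=14.0, obs_var=0.1))
```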

Why Expertise Changes More Than What You Know

This is why expertise changes not just what you predict but how confidently you predict it. The novice surgeon and the expert surgeon both make predictions during a procedure. But the expert's predictions are weighted with decades of accumulated evidence — a richer prior against which to evaluate any deviation, a more sophisticated model for distinguishing genuine anomaly from expected variation.

When an experienced structural engineer looks at a stress contour plot and says that result does not look right — before running any calculation, before checking any number — they are doing something neurologically profound. They are generating a prediction from a rich internal model, comparing it to the simulation output, and detecting a discrepancy. The surprise signal fires. That intuition, which looks from the outside like a mysterious gift, is in fact the output of decades of Bayesian updating. Expertise is a prediction machine that has been extensively trained.

Why Unambiguous Feedback Accelerates Learning

The Bayesian framework also explains why learning under uncertainty is so much harder than learning under clear feedback. If the world gives ambiguous signals — if the outcome of an action could have been caused by multiple factors — the brain struggles to identify which part of its model needs updating. Unambiguous error signals are the most efficient teachers. The cleaner the feedback, the faster the model updates. This is one of the strongest arguments for deliberate practice, for well-designed simulation, and for the kind of physical testing that produces clear, interpretable results.
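
The effect of feedback quality is visible if the same kind of update is iterated. In the toy run below, the starting belief, the true value, and both noise levels are invented; the observation is simply set to the true value each time, so the only difference between the two cases is how reliable the feedback is assumed to be.

```python
# How assumed feedback noise changes learning speed (illustrative numbers only).

def track(obs_var, true_value=50.0, steps=5):
    mean, var = 0.0, 100.0                  # vague starting belief
    for _ in range(steps):
        gain = var / (var + obs_var)        # how much this observation is trusted
        mean += gain * (true_value - mean)  # observation set to the true value
        var = (1.0 - gain) * var
    return round(mean, 1)

print("clean feedback (low assumed noise):    ", track(obs_var=1.0))    # near 50
print("ambiguous feedback (high assumed noise):", track(obs_var=100.0)) # still far off
```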

 

CAE as Formalized Prediction

 

Everything described so far about the brain is a structural description of what a finite element simulation does. Not by analogy. Not loosely. Structurally.

A simulation is a formalized internal model of a physical system. It encodes the best available knowledge about geometry, materials, boundary conditions, and the physics of the event. From that model it generates predictions: this is where the stress will concentrate, this is the displacement the structure will undergo, this is the acceleration the internal components will experience. These are predictions generated before the physical event occurs, from the best model the analyst can build.

Those predictions are then tested against reality — in physical testing, field observation, and correlation studies. Where the predictions match, confidence in the model grows. The prior is reinforced. Where the predictions deviate from reality — where the surprise signal arrives — the model is examined and revised. Material characterization is improved. Contact definitions are refined. Unmodeled features are added.

The gap between model and reality is not a failure of the simulation. It is the prediction error — the signal that drives the model's learning. It is the most valuable output the simulation can produce.

A simulation that perfectly predicted every test result would be extraordinary. But it would also have nothing left to teach. It is the gap that teaches. It is the surprise that advances understanding. The engineer who receives a discrepancy between model and test with curiosity — who asks what is this gap telling me about my model of this system — is the engineer whose understanding grows most rapidly through the work.
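
The shape of that loop can be sketched in engineering terms. The example below is hypothetical from end to end: a one-parameter "simulation" predicts deflection from load through a stiffness guess, and each test point supplies an error signal that nudges the guess. Real model correlation involves far more than one parameter and a proportional update; only the structure of the loop is the point.

```python
# Illustrative model-correlation loop (all values hypothetical).
# Predict, compare to test, and apply a small model update on each pass.

test_data = [(100.0, 2.1), (200.0, 4.0), (300.0, 6.2)]   # (load in N, measured mm)

stiffness = 65.0        # initial model guess, N/mm
learning_rate = 0.2

for load, measured in test_data:
    predicted = load / stiffness                         # the model's prediction
    error = measured - predicted                         # the surprise
    stiffness *= 1.0 - learning_rate * error / measured  # revise the model
    print(f"load={load:5.0f} N  predicted={predicted:4.2f} mm  "
          f"measured={measured:4.2f} mm  error={error:+5.2f} mm")

print(f"updated stiffness estimate: {stiffness:.1f} N/mm")
```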

Active Inference in Product Development

The free energy principle applies directly to engineering practice. When a simulation engineer encounters a discrepancy between model and test, two strategies are available: update the model, or modify the design. Both close the gap. Both minimize the prediction error. In engineering practice, both strategies are used — often simultaneously, in an iterative process that converges toward a product that is both well-understood and well-designed.

This is active inference applied to product development. The engineer is not a passive observer of the simulation's output. The engineer is an active agent in a prediction-error-minimization loop that spans model, test, design revision, and model again. The analyst and the engineer are, together, a biological-computational prediction system working to build the most accurate possible model of how the product behaves — and to act on that model to produce the best possible design.

 

The Physiology of Active Learning

 

The Silly Putty is not a gimmick.

When a student is handed a piece of Silly Putty and asked to pull it slowly and then snap it fast, the brain is given a prediction problem. The first pull probably confirms the prediction — it is soft and compliant. The snap violates the prediction. The material that just felt like taffy now fractures like a brittle solid. Surprise. Error signal. The brain asks: what model do I need to update to account for this?

The answer — rate-dependent material behavior, the idea that the same material can respond completely differently depending on how fast it is deformed — is now attached to a sensory experience rather than a verbal description. The model that updates from that experience is richer and more durable than the model that updates from reading the words on a page. Physical experience is among the richest forms of evidence the brain can receive. The Bayesian prior built from felt experience carries more weight than the prior built from abstract description.
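
The rate dependence itself can be illustrated with the simplest viscoelastic idealization, a single Maxwell element (a spring and a dashpot in series). The parameter values below are invented; this is a sketch of the qualitative effect, not a calibrated model of Silly Putty. Pulled slowly, the dashpot has time to flow and the stress stays low; pulled quickly, the element has no time to relax and responds almost like a stiff elastic solid.

```python
# Rate dependence of a single Maxwell element (spring + dashpot in series).
# Parameter values are invented for illustration, not calibrated to any material.

import math

E   = 1.0e6    # spring stiffness, Pa
eta = 1.0e5    # dashpot viscosity, Pa*s  (relaxation time tau = eta/E = 0.1 s)

def stress_at_end_of_pull(strain_rate, total_strain=0.5):
    """Stress after pulling to total_strain at a constant strain rate.

    Closed-form Maxwell response under constant strain rate:
    sigma(t) = eta * strain_rate * (1 - exp(-t / tau)), with tau = eta / E.
    """
    tau = eta / E
    t_end = total_strain / strain_rate
    return eta * strain_rate * (1.0 - math.exp(-t_end / tau))

for rate in (0.1, 1.0, 100.0):   # slow pull ... fast snap, in 1/s
    print(f"strain rate {rate:6.1f} /s -> stress {stress_at_end_of_pull(rate)/1e3:7.1f} kPa")
```

With these made-up values the slow pull ends near 10 kPa while the fast snap approaches the purely elastic limit of about 500 kPa: the same qualitative gap the hands feel.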

The composition book is not nostalgia. Writing by hand requires the brain to process, synthesize, and encode information rather than simply receive it. The motor act of writing recruits the prediction machinery in a way that typing, and certainly passive listening, does not. The student who writes what they are learning is generating a prediction — this is what I understood — and committing it to a testable form. Writing is inference.

Why Passive Learning Fails

Passive learning — sitting in a lecture, reading a textbook, watching someone else perform the analysis — does not generate predictions. Without predictions, there is no prediction error. Without prediction error, there is no update signal. Without an update signal, the internal model does not change. Information may enter working memory temporarily, but it does not become part of the durable model that constitutes genuine understanding.

Active engagement — being asked to predict what will happen before seeing the answer, being asked to explain why the result looks the way it does, being asked to propose what design change would address the finding — forces the brain to generate predictions. Those predictions, tested against the reality of the analysis, produce the error signals that actually update the model. That is learning that lasts.

 

Expertise, Growth, and the Compounding Model

 

The experienced engineer does not simply know more facts than the junior engineer. That is a superficial description of the difference. The experienced engineer has built richer, more accurate, more extensively calibrated internal models of how physical systems behave. They have accumulated a vast prior — weighted by decades of predictions confirmed and predictions corrected — that allows them to generate better hypotheses, ask better questions, and interpret evidence with greater precision.

Expertise is a prediction machine that has been extensively trained. And this framing has a critical implication for how engineering development should be understood and pursued. If expertise is a well-calibrated prediction machine, then the goal of engineering education — formal and informal, in school and in practice — is not the transfer of information. It is the construction of better internal models. And the construction of better internal models requires exposure to prediction error.

The Compounding Nature of Engaged Practice

The brain is not a static machine. It is a living system that continuously remodels itself in response to the prediction errors it encounters. The technical term is synaptic plasticity — the strengthening and weakening of connections between neurons based on experience. The functional description is simpler: the brain is always in the process of becoming.

Every prediction made and tested. Every simulation run and compared to physical evidence. Every design review where results that are not fully understood prompt the question that reveals the mechanism. Every step outside the comfortable boundary of the current model and into the territory where predictions break down — each of these experiences is an opportunity for the brain to become something it was not before.

This compounds. The engineer whose practice consistently involves generating predictions and receiving honest feedback about where those predictions fall short builds an internal model that grows in accuracy and richness year over year. The engineer whose practice involves executing familiar procedures without generating and testing new predictions may accumulate years of experience without accumulating equivalent depth of understanding. Time in the field is necessary but not sufficient. Engaged prediction is what turns time into expertise.

 

The Gap as Gift

 

The gap — the uncomfortable space where the model does not fit the reality — is not just professionally important. It is personally important. The gaps in understanding are where growth lives. The orientation that distinguishes the engineer who compounds their understanding from the one who does not is the orientation toward the gap rather than away from it.

Being curious about the surprise rather than defensive about the prediction that failed. Receiving the error signal with interest rather than anxiety. Asking what is this gap telling me about my model rather than how do I explain away this discrepancy. These are not personality traits. They are learnable orientations. Practices that can be cultivated deliberately, in every simulation review, every test correlation, every conversation between analyst and engineer.

The engineer who is afraid to be wrong is the engineer whose model stops growing. The engineer who is curious about being wrong is the engineer who compounds their understanding with every project.

When a design succeeds — when the product survives the drop, when the structure performs as specified — it is not just a technical achievement. It is a confirmed prediction. The brain of the engineer who made that prediction is updated: this is what good looks like in this domain. When the design fails — when the glass breaks at a load that should have been safe, when the simulation said pass and the test said fail — it is not just a setback. It is the most valuable data point the engineer's model will ever receive. The error signal that, received with curiosity rather than defensiveness, produces the largest update to the internal model.

 

Conscious Modeling: The Practice That Compounds

 

We are all prediction machines. The brain sees to that. What we are not, by default, is conscious prediction machines. The default setting is automatic — predictions generated without awareness, errors corrected without reflection, models updated without intention.

What engineering education can offer, at its deepest level, is the shift from unconscious to conscious modeling. The ability to say: here is my prediction. Here is my model. Here is where I expect the prediction to break down, and here is how I will test it. Here is the gap I found. Here is what I believe the gap is telling me. Here is how I am updating the model.

That practice — explicit, intentional, reflective — is what separates the engineer who grows continuously from the engineer who accumulates years without accumulating depth. It is available regardless of how much experience already exists. It is available at the first simulation review of a career and at the thousandth. It is the practice, not the credential, that builds the model.

The brain updating its models through experience. The simulation updating its models through testing. The engineer growing through engagement. It is all the same process. And it never really ends.

You are a prediction machine. You always have been. The world has been training your model since the moment you were born — through every reach and fall, every experiment and failure, every surprise and recovery. You did not choose to be a modeler of the world. Your brain made that choice for you, before you had words for it.

What you can choose is how consciously you engage with that process. How deliberately you seek out the gaps. How honestly you receive the error signals. How rigorously you update rather than defend. The practice is available right now. It has always been available. The only question is whether you are using it.

 

— End —

 

© 2026 Joseph P. McFadden, Sr.  |  The Holistic Analyst  |  McFaddenCAE.com

Freely shared for the engineering community. Not for resale.